8 research outputs found

    Streaming Graph Challenge: Stochastic Block Partition

    An important objective for analyzing real-world graphs is to achieve scalable performance on large, streaming graphs. A challenging and relevant example is the graph partition problem. As a combinatorial problem, graph partition is NP-hard, but existing relaxation methods provide reasonable approximate solutions that can be scaled to large graphs. Competitive benchmarks and challenges have proven to be an effective means to advance state-of-the-art performance and foster community collaboration. This paper describes a graph partition challenge with a baseline partition algorithm of sub-quadratic complexity. The algorithm employs rigorous Bayesian inferential methods based on a statistical model that captures characteristics of real-world graphs. This strong foundation enables the algorithm to address limitations of well-known graph partition approaches such as modularity maximization. This paper describes various aspects of the challenge, including: (1) the data sets and streaming graph generator, (2) the baseline partition algorithm with pseudocode, (3) an argument for the correctness of parallelizing the Bayesian inference, (4) different parallel computation strategies such as node-based parallelism and matrix-based parallelism, (5) evaluation metrics for partition correctness and computational requirements, (6) preliminary timing of a Python-based demonstration code and the open-source C++ code, and (7) considerations for partitioning the graph in streaming fashion. Data sets, source code for the algorithm, and metrics, with detailed documentation, are available at GraphChallenge.org. Comment: To be published in the 2017 IEEE High Performance Extreme Computing Conference (HPEC).
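    The challenge's baseline relies on Bayesian inference over a degree-corrected stochastic block model; the full algorithm, with agglomerative block merges and MCMC nodal updates, is given in the paper's pseudocode and in the code at GraphChallenge.org. As a rough illustration only, the sketch below shows a greedy nodal-move loop under a much simpler, non-degree-corrected block score; the function names and the simplified score are assumptions made for this sketch, not the baseline's API.

```python
# Illustrative sketch only: greedy nodal moves under a simplified block score.
# The challenge baseline uses a degree-corrected SBM with Bayesian/MCMC updates
# and agglomerative merges; this toy version only conveys the flavor of the step.
import math
from collections import defaultdict

def sbm_score(block_of, edges):
    """Simplified block-affinity score: sum over block pairs of
    e_rs * log(e_rs / possible_pairs(r, s)). Not the baseline's posterior."""
    size = defaultdict(int)
    for b in block_of.values():
        size[b] += 1
    e = defaultdict(int)
    for u, v in edges:
        r, s = sorted((block_of[u], block_of[v]))
        e[(r, s)] += 1
    score = 0.0
    for (r, s), ers in e.items():
        pairs = size[r] * size[s] if r != s else size[r] * (size[r] - 1) // 2
        if pairs > 0:
            score += ers * math.log(ers / pairs)
    return score

def greedy_block_partition(nodes, edges, num_blocks, sweeps=5):
    """Repeatedly place each node in the block that maximizes the score,
    holding all other assignments fixed."""
    block_of = {v: i % num_blocks for i, v in enumerate(nodes)}  # arbitrary start
    for _ in range(sweeps):
        for v in nodes:
            block_of[v] = max(range(num_blocks),
                              key=lambda b: sbm_score({**block_of, v: b}, edges))
    return block_of

# Tiny usage example: two 4-node cliques joined by a single bridge edge.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
         (4, 5), (4, 6), (4, 7), (5, 6), (5, 7), (6, 7), (3, 4)]
print(greedy_block_partition(list(range(8)), edges, num_blocks=2))
```

    Note that this sketch takes the number of blocks as given, whereas the baseline algorithm also infers it.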

    XLab: Early Indications & Warnings from Open Source Data with Application to Biological Threat

    XLab is an early warning system that addresses a broad range of national security threats using a flexible, rapidly reconfigurable architecture. XLab enables intelligence analysts to visualize, explore, and query a knowledge base constructed from multiple data sources, guided by subject matter expertise codified in threat model graphs. This paper describes a novel system prototype that addresses threats arising from biological weapons of mass destruction. The prototype applies knowledge extraction analytics, including link estimation, entity disambiguation, and event detection, to build a knowledge base of 40 million entities and 140 million relationships from open sources. Exact and inexact subgraph matching analytics enable analysts to search the knowledge base for instances of modeled threats. The paper introduces new methods for inexact matching that accommodate threat models with temporal and geospatial patterns. System performance is demonstrated using several simplified threat models and an embedded scenario.
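    As a loose illustration of the exact-matching piece described above, and not XLab's actual analytics (whose interfaces are not given here), the sketch below searches a toy typed knowledge graph for instances of a small threat-model pattern using NetworkX's subgraph isomorphism matcher. The entity names and the "kind" attribute are invented for the example; inexact, temporal, and geospatial matching are not shown.

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher, categorical_node_match

# Toy knowledge base: entities typed via a "kind" attribute, edges are relationships.
kb = nx.Graph()
kb.add_node("dr_x", kind="person")
kb.add_node("acme_labs", kind="facility")
kb.add_node("agent_y", kind="material")
kb.add_edges_from([("dr_x", "acme_labs"), ("acme_labs", "agent_y")])

# Toy threat model: a person tied to a facility that is tied to a material.
pattern = nx.Graph()
pattern.add_node("P", kind="person")
pattern.add_node("F", kind="facility")
pattern.add_node("M", kind="material")
pattern.add_edges_from([("P", "F"), ("F", "M")])

# Exact subgraph matching with typed nodes.
matcher = GraphMatcher(kb, pattern, node_match=categorical_node_match("kind", None))
for mapping in matcher.subgraph_isomorphisms_iter():
    print(mapping)   # maps knowledge-base entities onto pattern roles
```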

    GraphChallenge.org: Raising the Bar on Graph Analytic Performance

    The rise of graph analytic systems has created a need for new ways to measure and compare the capabilities of graph processing systems. The MIT/Amazon/IEEE Graph Challenge has been developed to provide a well-defined community venue for stimulating research and highlighting innovations in graph analysis software, hardware, algorithms, and systems. GraphChallenge.org provides a wide range of pre-parsed graph data sets, graph generators, mathematically defined graph algorithms, example serial implementations in a variety of languages, and specific metrics for measuring performance. Graph Challenge 2017 received 22 submissions by 111 authors from 36 organizations. The submissions highlighted graph analytic innovations in hardware, software, algorithms, systems, and visualization. These submissions produced many comparable performance measurements that can be used for assessing the current state of the art of the field. Numerous submissions implemented the triangle counting challenge, yielding over 350 distinct measurements. Analysis of these submissions shows that their execution time is a strong function of the number of edges in the graph, N_e, and is typically proportional to N_e^{4/3} for large values of N_e. Combining the model fits of the submissions presents a picture of the current state of the art of graph analysis, which is typically 10^8 edges processed per second for graphs with 10^8 edges. These results are 30 times faster than serial implementations commonly used by many graph analysts and underscore the importance of making these performance benefits available to the broader community. Graph Challenge provides a clear picture of current graph analysis systems and underscores the need for new innovations to achieve high performance on very large graphs. Comment: 7 pages, 6 figures; submitted to IEEE HPEC Graph Challenge. arXiv admin note: text overlap with arXiv:1708.0686
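    Many triangle counting submissions implement some variant of the standard sparse linear-algebra formulation, in which the triangle count of a simple undirected graph with adjacency matrix A is sum(A .* (A A)) / 6. The sketch below is a plain SciPy rendering of that formula for reference; submitted systems use far more optimized kernels, and the helper name here is invented.

```python
import numpy as np
import scipy.sparse as sp

def count_triangles(edges, num_nodes):
    """Triangle count of a simple undirected graph via sum(A .* (A @ A)) / 6.
    Assumes each undirected edge appears once in `edges` and no self-loops."""
    rows, cols = zip(*edges)
    a = sp.coo_matrix((np.ones(len(edges)), (rows, cols)),
                      shape=(num_nodes, num_nodes))
    a = (a + a.T).tocsr()              # symmetric 0/1 adjacency matrix
    # Each triangle contributes 6 to the element-wise sum below.
    return int(a.multiply(a @ a).sum()) // 6

# K4 (the complete graph on 4 vertices) contains exactly 4 triangles.
print(count_triangles([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)], 4))
```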

    Probabilistic Fault Tree Analysis

    Presentation.

    Technologies for Reliable AI Test and Evaluation

    Artificial intelligence (AI) is revolutionizing many industries, while at the same time facing challenges to safe and reliable use, such as vulnerability to adversarial attacks and data drift. Although many AI test and evaluation (T&E) tools exist, integrating them is difficult. Under a program funded by the Chief Digital and AI Office (CDAO), we are developing a library to simplify the AI T&E process by providing user- and developer-friendly interfaces for composing T&E workflows. We illustrate the effectiveness of this approach with an example that compares the clean and perturbed accuracy of two models on a computer vision dataset.
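    The library and its interfaces are not named above, so the sketch below only illustrates the kind of comparison described, clean versus perturbed accuracy for two models on an image dataset, using scikit-learn models and additive Gaussian noise as a stand-in perturbation. The model choices and parameters are assumptions made for the illustration, not the library's workflow.

```python
# Generic clean-vs-perturbed accuracy comparison for two models (illustrative).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=2000),
    "mlp": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
}

# Additive Gaussian noise as a simple stand-in for an adversarial perturbation.
rng = np.random.default_rng(0)
X_perturbed = X_test + rng.normal(scale=2.0, size=X_test.shape)

for name, model in models.items():
    model.fit(X_train, y_train)
    clean_acc = model.score(X_test, y_test)
    perturbed_acc = model.score(X_perturbed, y_test)
    print(f"{name}: clean={clean_acc:.3f} perturbed={perturbed_acc:.3f}")
```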